
    On-line learning in multilayer neural networks

    We present an analytic solution to the problem of on-line gradient-descent learning for two-layer neural networks with an arbitrary number of hidden units in both teacher and student networks. The technique, demonstrated here for the case of adaptive input-to-hidden weights, becomes exact as the dimensionality of the input space increases.
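
    A minimal sketch of this learning scenario, in the spirit of the soft committee machines common in this line of work (the tanh activation, network sizes, and learning rate below are illustrative assumptions, not the paper's exact setup):

        import numpy as np

        rng = np.random.default_rng(0)
        N, M, K = 500, 2, 2   # input dimension, teacher / student hidden units
        eta = 0.1             # learning rate

        # Fixed teacher weights B and adaptive student weights J.
        B = rng.standard_normal((M, N))
        J = rng.standard_normal((K, N))

        g = np.tanh                           # hidden activation (tanh stands in for erf)
        dg = lambda x: 1.0 - np.tanh(x) ** 2  # its derivative

        for step in range(10_000):
            xi = rng.standard_normal(N)        # fresh random input example
            y = g(B @ xi / np.sqrt(N)).sum()   # teacher label
            h = J @ xi / np.sqrt(N)            # student hidden fields
            s = g(h).sum()                     # student output
            # On-line gradient descent on the per-example squared error
            # (y - s)^2 / 2, updating only the input-to-hidden weights.
            J += eta * (y - s) * np.outer(dg(h), xi) / np.sqrt(N)

    In analyses of this kind, the large-N limit reduces this high-dimensional stochastic process to deterministic equations for a few order parameters (overlaps between teacher and student weight vectors), which is why the treatment becomes exact as the input dimensionality grows.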

    Many Attractors, Long Chaotic Transients, and Failure in Small-World Networks of Excitable Neurons

    We study the dynamical states that emerge in a small-world network of recurrently coupled excitable neurons through both numerical and analytical methods. These dynamics depend in large part on the fraction of long-range connections or 'short-cuts' and the delay in the neuronal interactions. Persistent activity arises for a small fraction of 'short-cuts', while a transition to failure occurs at a critical value of the 'short-cut' density. The persistent activity consists of multi-stable periodic attractors, the number of which is at least on the order of the number of neurons in the network. For long enough delays, network activity at high 'short-cut' densities is shown to exhibit exceedingly long chaotic transients whose failure times, averaged over many network configurations, follow a stretched exponential. We show how this functional form arises in the ensemble-averaged activity if each network realization has a characteristic failure time that is exponentially distributed. (Comment: 14 pages, 23 figures.)
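
    The closing claim can be made concrete with a short calculation (a standard saddle-point sketch, assuming each realization's survival decays exponentially with its own characteristic time \tau, with \tau exponentially distributed with mean \tau_0; the paper's exact argument may differ in detail):

        S(t) = \int_0^\infty \frac{d\tau}{\tau_0}\, e^{-\tau/\tau_0}\, e^{-t/\tau}
             \;\sim\; \exp\!\left(-2\sqrt{t/\tau_0}\right), \qquad t \gg \tau_0,

    since the exponent $-\tau/\tau_0 - t/\tau$ is stationary at $\tau^\ast = \sqrt{t\,\tau_0}$; the ensemble-averaged survival probability is therefore a stretched exponential with stretching exponent 1/2.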

    Macroscopic Dynamics of Neural Networks with Heterogeneous Spiking Thresholds

    Mean-field theory links the physiological properties of individual neurons to the emergent dynamics of neural population activity. These models provide an essential tool for studying brain function at different scales; however, to be applied to large-scale neural populations, they need to account for differences between distinct neuron types. The Izhikevich single-neuron model can account for a broad range of neuron types and spiking patterns, making it a strong candidate for a mean-field theoretic treatment of brain dynamics in heterogeneous networks. Here, we derive the mean-field equations for networks of all-to-all coupled Izhikevich neurons with heterogeneous spiking thresholds. Using methods from bifurcation theory, we examine the conditions under which the mean-field theory accurately predicts the dynamics of the Izhikevich neuron network. To this end, we focus on three important features of the Izhikevich model that are subject here to simplifying assumptions: (i) spike-frequency adaptation, (ii) the spike reset conditions, and (iii) the distribution of single-cell spike thresholds across neurons. Our results indicate that, while the mean-field model is not an exact description of the Izhikevich network dynamics, it faithfully captures its different dynamic regimes and phase transitions. We thus present a mean-field model that can represent different neuron types and spiking dynamics. The model comprises biophysical state variables and parameters, incorporates realistic spike-resetting conditions, and accounts for heterogeneity in neural spiking thresholds. These features allow for broad applicability of the model as well as for direct comparison to experimental data. (Comment: 13 pages, 4 figures.)
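
    A minimal sketch of the kind of network being reduced here, using the classic Izhikevich (2003) equations (the coupling form, parameter values, and the Lorentzian threshold spread are illustrative assumptions; Lorentzian heterogeneity is the standard choice that keeps this family of mean-field reductions tractable):

        import numpy as np

        rng = np.random.default_rng(1)
        n, T, dt = 200, 1000.0, 0.5          # neurons, duration (ms), time step (ms)
        a, b, c, d = 0.02, 0.2, -65.0, 8.0   # classic regular-spiking parameters
        I_ext, Jc = 10.0, 0.5                # external drive and all-to-all coupling

        # Heterogeneous spiking thresholds around the usual 30 mV peak value,
        # Lorentzian-distributed (an assumption) and clipped for numerical sanity.
        v_th = np.clip(30.0 + rng.standard_cauchy(n), 0.0, 60.0)

        v = np.full(n, c)
        u = b * v
        for _ in range(int(T / dt)):
            fired = v >= v_th
            v[fired] = c                     # spike reset condition
            u[fired] += d                    # spike-frequency adaptation kick
            syn = Jc * fired.sum() / n       # instantaneous all-to-all coupling
            v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I_ext + syn)
            u += dt * a * (b * v - u)

    The mean-field reduction replaces the n coupled (v, u) pairs with low-dimensional equations for population-level quantities such as the mean membrane potential and firing rate, on which the paper's bifurcation analysis is then carried out.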

    On-line learning from clustered input examples


    Learning with noise and regularizers in multilayer neural networks

    We study the effect of two types of noise, data noise and model noise, in an on-line gradient-descent learning scenario for a general two-layer student network with an arbitrary number of hidden units. Training examples are randomly drawn input vectors labeled by a two-layer teacher network with an arbitrary number of hidden units. The data are then corrupted by Gaussian noise affecting either the output or the model itself. We examine the effect of both types of noise on the evolution of the order parameters and the generalization error in various phases of the learning process.
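
    Reusing the names from the on-line learning sketch above (B, xi, g, N, rng), the two noise types differ only in where the Gaussian corruption enters; the noise levels are illustrative:

        sigma_out, sigma_model = 0.3, 0.05   # illustrative noise levels

        # Data (output) noise: the teacher's label itself is corrupted.
        y_data = g(B @ xi / np.sqrt(N)).sum() + sigma_out * rng.standard_normal()

        # Model noise: the teacher's weights are perturbed on each example.
        B_noisy = B + sigma_model * rng.standard_normal(B.shape)
        y_model = g(B_noisy @ xi / np.sqrt(N)).sum()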

    Dynamics of on-line gradient descent learning for multilayer neural networks

    We consider the problem of on-line gradient descent learning for general two-layer neural networks. An analytic solution is presented and used to investigate the role of the learning rate in controlling the evolution and convergence of the learning process.

    Rewiring Neural Interactions by Micro-Stimulation

    Plasticity is a crucial component of normal brain function and a critical mechanism for recovery from injury. In vitro, associative pairing of presynaptic spiking and stimulus-induced postsynaptic depolarization causes changes in the synaptic efficacy of the presynaptic neuron when it is activated by extrinsic stimulation. In vivo, such paradigms can alter the responses of whole groups of neurons to stimulation. Here, we used in vivo spike-triggered stimulation to drive plastic changes in rat forelimb sensorimotor cortex, which we monitored using a statistical measure of functional connectivity inferred from the spiking statistics of the neurons during normal, spontaneous behavior. These induced plastic changes in inferred functional connectivity depended on the latency between trigger spike and stimulation, and appear to reflect a robust reorganization of the network. Such targeted connectivity changes might provide a tool for rerouting the flow of information through a network, with implications for both rehabilitation and brain–machine interface applications.
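
    A schematic of the spike-triggered stimulation loop described above (all names, the latency value, and the lockout rule are illustrative assumptions; the actual experiment runs on dedicated recording and stimulation hardware):

        import numpy as np

        rng = np.random.default_rng(2)
        latency_ms = 10.0   # trigger-spike-to-stimulus delay: the knob the
                            # induced connectivity changes were found to depend on

        # Hypothetical spike times (ms) from the recorded trigger neuron.
        trigger_spikes = np.cumsum(rng.exponential(100.0, size=50))

        # Each trigger spike schedules a stimulus after the chosen latency,
        # subject to a lockout so stimuli are not delivered in rapid bursts.
        delivered, last = [], -np.inf
        for t in trigger_spikes + latency_ms:
            if t - last >= 100.0:   # 100 ms lockout (an assumption)
                delivered.append(t)
                last = t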